Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities. In this paper, we propose COIN++, a neural compression framework that seamlessly handles a wide range of data modalities. Our approach is based on converting data into implicit neural representations, i.e., neural functions that map coordinates (such as pixel locations) to features (such as RGB values). Then, instead of storing the weights of the implicit neural representation directly, we store modulations applied to a meta-learned base network as a compressed code for the data. We further quantize and entropy code these modulations, leading to large compression gains while reducing encoding time by two orders of magnitude compared to baselines. We demonstrate the effectiveness of our method by compressing a wide variety of data modalities, from images and audio to medical and climate data.
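To make the storage scheme concrete, here is a minimal PyTorch sketch of an implicit neural representation whose hidden layers are shifted by per-datum modulations; the SIREN-style sine activation, the layer sizes, and all names are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class ModulatedINR(nn.Module):
    """Shared base network whose hidden layers are shifted by small per-datum
    modulation vectors; only the modulations would be stored per datum."""

    def __init__(self, in_dim=2, hidden=64, out_dim=3, num_layers=3, w0=30.0):
        super().__init__()
        dims = [in_dim] + [hidden] * num_layers
        self.base = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)]
        )
        self.head = nn.Linear(hidden, out_dim)
        self.w0 = w0

    def forward(self, coords, modulations):
        # coords: (N, in_dim); modulations: list of (hidden,) shift vectors
        h = coords
        for layer, shift in zip(self.base, modulations):
            h = torch.sin(self.w0 * (layer(h) + shift))
        return self.head(h)

# Encoding one image then amounts to optimizing only the modulations with the
# meta-learned base frozen, e.g.:
# mods = [torch.zeros(64, requires_grad=True) for _ in range(3)]
# opt = torch.optim.SGD(mods, lr=1e-2)
# loss = ((model(pixel_coords, mods) - rgb_targets) ** 2).mean()
```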
It is common practice in deep learning to represent a measurement of the world on a discrete grid, e.g. a 2D grid of pixels. However, the underlying signal represented by these measurements is often continuous, e.g. the scene depicted in an image. A powerful continuous alternative is then to represent these measurements using an implicit neural representation, a neural function trained to output the appropriate measurement value for any input spatial location. In this paper, we take this idea to its next level: what would it take to perform deep learning on these functions instead, treating them as data? In this context we refer to the data as functa, and propose a framework for deep learning on functa. This view presents a number of challenges around efficient conversion from data to functa, compact representation of functa, and effectively solving downstream tasks on functa. We outline a recipe to overcome these challenges and apply it to a wide range of data modalities including images, 3D shapes, neural radiance fields (NeRF) and data on manifolds. We demonstrate that this approach has various compelling properties across data modalities, in particular on the canonical tasks of generative modeling, data imputation, novel view synthesis and classification. Code: https://github.com/deepmind/functa
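As a rough illustration of the "deep learning on functa" view, the sketch below assumes each datum has already been converted to a fixed-size modulation vector and trains an ordinary classifier on those vectors; the dimensions and names are placeholders, not the repository's API.

```python
import torch
import torch.nn as nn

class FunctaClassifier(nn.Module):
    """Hypothetical downstream model: once every datum is represented by a
    fixed-size modulation vector (a functum), an ordinary MLP can classify it."""

    def __init__(self, functa_dim=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(functa_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, functa_batch):  # (B, functa_dim)
        return self.net(functa_batch)

# functa = torch.randn(32, 256)   # stand-in for a batch of fitted functa
# logits = FunctaClassifier()(functa)
```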
We show that Neural Ordinary Differential Equations (ODEs) learn representations that preserve the topology of the input space and prove that this implies the existence of functions Neural ODEs cannot represent. To address these limitations, we introduce Augmented Neural ODEs which, in addition to being more expressive models, are empirically more stable, generalize better and have a lower computational cost than Neural ODEs.
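A minimal sketch of the augmentation idea, assuming a plain fixed-step Euler integrator in place of the adaptive solvers typically used: the input is padded with extra zero-valued dimensions so the learned flow acts in a higher-dimensional space.

```python
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Learned vector field dz/dt = f(z) acting on the augmented state."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, z):
        return self.net(z)

def augmented_ode_forward(x, func, aug_dim=5, steps=20, t1=1.0):
    """Pad the input with aug_dim zero channels, then integrate the flow with a
    simple fixed-step Euler scheme (a stand-in for an adaptive solver)."""
    z = torch.cat([x, torch.zeros(x.shape[0], aug_dim, device=x.device)], dim=1)
    dt = t1 / steps
    for _ in range(steps):
        z = z + dt * func(z)
    return z

# x = torch.randn(8, 2)                # 2-D input points
# func = ODEFunc(dim=2 + 5)            # the flow lives in the augmented space
# z1 = augmented_ode_forward(x, func)  # trajectories no longer need to cross in R^2
```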
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
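Since the models are openly released, a hedged usage example with the Hugging Face transformers library looks roughly as follows; it loads bigscience/bloom-560m, one of the smaller publicly released checkpoints, rather than the full 176B model, to keep the example runnable.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load one of the smaller publicly released BLOOM checkpoints.
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The BLOOM model was trained on", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```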
Visual SLAM (Simultaneous Localization and Mapping) in dynamic environments typically relies on identifying and masking image features on moving objects to prevent them from negatively affecting performance. Current approaches are suboptimal: they either fail to mask objects when needed or, on the contrary, mask objects needlessly. Thus, we propose a novel SLAM approach that learns when masking objects improves its performance in dynamic scenarios. Given a method to segment objects and a SLAM system, we give the latter the ability of Temporal Masking, i.e., to infer when certain classes of objects should be masked to maximize any given SLAM metric. We do not impose any priors on motion: our method learns to mask moving objects by itself. To avoid high annotation costs, we created an automatic annotation method for self-supervised training. We also constructed a new dataset, named ConsInv, which includes challenging real-world dynamic sequences both indoors and outdoors. Our method reaches the state of the art on the TUM RGB-D dataset and outperforms it on the KITTI and ConsInv datasets.
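The masking operation itself (not the learned decision of when to apply it) can be illustrated with a small sketch: keypoints falling on segmentation labels the system has chosen to mask are simply discarded before tracking. All names and shapes here are assumptions for illustration only.

```python
import numpy as np

def mask_keypoints(keypoints, seg_map, classes_to_mask):
    """Drop keypoints falling on pixels whose class the SLAM has decided to mask
    at this time step. keypoints: (N, 2) array of (x, y) pixel coordinates;
    seg_map: (H, W) integer class map."""
    xs = keypoints[:, 0].astype(int)
    ys = keypoints[:, 1].astype(int)
    labels = seg_map[ys, xs]
    keep = ~np.isin(labels, list(classes_to_mask))
    return keypoints[keep]

# seg_map = np.zeros((480, 640), dtype=np.int64)
# seg_map[200:, 250:] = 3                            # e.g. class 3 = "car"
# kps = np.array([[120.3, 45.7], [300.1, 210.9]])
# mask_keypoints(kps, seg_map, classes_to_mask={3})  # keeps only the first point
```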
Sketch-based image retrieval (SBIR) is the task of retrieving natural images (photos) that match the semantics and spatial configuration of hand-drawn sketch queries. The universality of sketches broadens the range of possible applications and increases the need for efficient SBIR solutions. In this paper, we study the classic triplet-based SBIR solution and show that persistent invariance to horizontal flips (even after model fine-tuning) harms performance. To overcome this limitation, we propose several approaches and evaluate each of them in depth to check its effectiveness. Our main contributions are twofold: we propose and evaluate several intuitive modifications to build SBIR solutions with better flip equivariance, and we show that Vision Transformers are better suited to the SBIR task, outperforming much larger CNNs. We conducted numerous experiments and introduce the first model to surpass human performance on a large-scale SBIR benchmark (Sketchy). Our best model achieves a recall of 62.25% (at k = 1) on the Sketchy benchmark, compared to 46.2% for the previous state-of-the-art method.
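For reference, the classic triplet-based setup the paper starts from can be sketched as follows; the encoders and margin are placeholders, and the paper's actual contribution lies in modifying such a pipeline to remove the unwanted horizontal-flip invariance.

```python
import torch.nn as nn

triplet_loss = nn.TripletMarginLoss(margin=0.2)

def sbir_triplet_step(sketch_encoder, photo_encoder, sketch, photo_pos, photo_neg):
    """One training step of the baseline: pull the sketch embedding towards its
    matching photo and push it away from a non-matching one."""
    anchor = sketch_encoder(sketch)      # embedding of the hand-drawn query
    positive = photo_encoder(photo_pos)  # matching natural image
    negative = photo_encoder(photo_neg)  # non-matching natural image
    return triplet_loss(anchor, positive, negative)
```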
3D reverse engineering is a long sought-after, yet not fully achieved, goal in the Computer-Aided Design (CAD) industry. The objective is to recover the construction history of a CAD model. Starting from the Boundary Representation (B-Rep) of a CAD model, this paper proposes a new deep neural network, CADOps-Net, that jointly learns the CAD operation types and the decomposition into different CAD operation steps. This joint learning allows a B-Rep to be divided into parts that were created by various types of CAD operations at the same construction step, thereby providing relevant information for further recovery of the design history. Furthermore, we propose the novel CC3D-Ops dataset, which includes 37k CAD models annotated with CAD operation type labels and step labels. Compared to existing datasets, the complexity and variety of the CC3D-Ops models are closer to those used for industrial purposes. Our experiments, conducted on the proposed CC3D-Ops and the publicly available Fusion 360 datasets, demonstrate the competitive performance of CADOps-Net with respect to the state of the art, and confirm the importance of jointly learning CAD operation types and steps.
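The joint prediction of operation type and step can be pictured as a two-head classifier over per-face B-Rep features, as in the hypothetical sketch below; the feature dimensions and label counts are made up for illustration and do not reflect the actual CADOps-Net architecture.

```python
import torch
import torch.nn as nn

class JointOpTypeStepHead(nn.Module):
    """Two classification heads over per-face B-Rep features: one predicts the
    CAD operation type of each face, the other the construction step."""

    def __init__(self, feat_dim=128, num_op_types=10, max_steps=20):
        super().__init__()
        self.op_head = nn.Linear(feat_dim, num_op_types)
        self.step_head = nn.Linear(feat_dim, max_steps)

    def forward(self, face_features):  # (num_faces, feat_dim)
        return self.op_head(face_features), self.step_head(face_features)

# Joint learning here simply means summing one cross-entropy loss per head:
# op_logits, step_logits = JointOpTypeStepHead()(torch.randn(50, 128))
# loss = nn.functional.cross_entropy(op_logits, op_labels) + \
#        nn.functional.cross_entropy(step_logits, step_labels)
```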
Reconstructing 3D human body shapes from partial textured 3D scans remains a fundamental task for many computer vision and graphics applications, such as body animation and virtual dressing. We propose a new neural network architecture for 3D body shape and high-resolution texture completion, BCom-Net, that can reconstruct the full geometry from mid-level to high-level partial input scans. We decompose the whole reconstruction task into two stages: first, a joint implicit learning network (SCom-Net and TCom-Net) that takes a voxelized scan and its occupancy grid as input to reconstruct the full body shape and predict vertex textures; second, a high-resolution texture completion network that utilizes the predicted coarse vertex textures to inpaint the missing parts of the partial texture atlas. A thorough experimental evaluation on the 3DBodyTex.V2 dataset shows that our method achieves competitive results with respect to the state of the art, while generalizing to different types and levels of partial shapes. The proposed method also ranked second in the SHARP 2022 Challenge 1, Track 1.
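A toy stand-in for the first stage might look like the following: an implicit decoder conditioned on a global code pooled from the voxelized partial scan, predicting occupancy and a coarse per-point colour for any query point. This is only a schematic illustration under assumed dimensions, not the published network.

```python
import torch
import torch.nn as nn

class ImplicitBodyDecoder(nn.Module):
    """Implicit decoder conditioned on a global code from the voxelized partial
    scan: for any query point it predicts occupancy (completed body shape) and
    a coarse RGB colour."""

    def __init__(self, code_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(3 + code_dim, 256), nn.ReLU())
        self.occ_head = nn.Linear(256, 1)  # inside/outside the completed body
        self.tex_head = nn.Linear(256, 3)  # coarse per-point RGB

    def forward(self, points, scan_code):  # points: (N, 3); scan_code: (code_dim,)
        code = scan_code.expand(points.shape[0], -1)
        h = self.backbone(torch.cat([points, code], dim=1))
        return torch.sigmoid(self.occ_head(h)), torch.sigmoid(self.tex_head(h))
```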
Large and performant neural networks are typically over-parameterized, and their size and complexity can be significantly reduced through pruning. Pruning is a family of methods that seek to remove redundant or unnecessary weights, or groups of weights, from a network. These techniques make it possible to create lightweight networks, which is particularly important for embedded or mobile applications. In this paper, we devise an alternative pruning method that allows effective subnetworks to be extracted from larger untrained ones. Our method is stochastic and extracts subnetworks by exploring different topologies sampled using Gumbel-Softmax. The latter is also used to train a probability distribution that measures the relevance of the weights in the sampled topologies. The resulting subnetworks are further enhanced using an efficient rescaling mechanism that reduces training time and improves performance. Extensive experiments on CIFAR show that our subnetwork extraction method outperforms related work.
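The Gumbel-Softmax sampling step can be sketched as follows: each weight carries a learnable keep/drop logit pair, and a straight-through relaxed sample yields a differentiable binary mask. The rescaling mechanism mentioned above is omitted, and all shapes and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sample_subnetwork_mask(keep_logits, tau=0.5):
    """keep_logits: (num_weights, 2) learnable logits for [drop, keep].
    A straight-through Gumbel-Softmax sample yields a binary mask whose
    gradients still flow back to the logits."""
    relaxed = F.gumbel_softmax(keep_logits, tau=tau, hard=True)
    return relaxed[:, 1]  # 1 = keep the corresponding weight

# logits = torch.zeros(1000, 2, requires_grad=True)
# mask = sample_subnetwork_mask(logits)            # resampled at each forward pass
# subnet_weights = dense_weights.flatten() * mask  # dense_weights: an untrained layer
```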
This work aims at generating captions for soccer videos using deep learning. In this context, this paper introduces a dataset, model, and triple-level evaluation. The dataset consists of 22k caption-clip pairs and three visual features (images, optical flow, inpainting) for ~500 hours of SoccerNet videos. The model is divided into three parts: a transformer learns language, ConvNets learn vision, and a fusion of linguistic and visual features generates captions. The paper suggests evaluating generated captions at three levels: syntax (the commonly used evaluation metrics such as BLEU score and CIDEr), meaning (the quality of descriptions for a domain expert), and corpus (the diversity of generated captions). The paper shows that the diversity of generated captions improved (from 0.07 to 0.18) with semantics-related losses that prioritize selected words. Semantics-related losses and the utilization of more visual features (optical flow, inpainting) improved the normalized captioning score by 28%. The web page of this work: https://sites.google.com/view/soccercaptioning
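As an example of a corpus-level diversity measure of the kind reported above, the sketch below computes a distinct-n ratio over generated captions; this is a common proxy for caption diversity and is not claimed to be the exact metric used in the paper.

```python
from collections import Counter

def distinct_n(captions, n=2):
    """Fraction of distinct n-grams among all n-grams generated over the corpus;
    higher values indicate more diverse captions."""
    ngrams = Counter()
    for caption in captions:
        tokens = caption.lower().split()
        ngrams.update(zip(*(tokens[i:] for i in range(n))))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

# distinct_n(["goal by the number seven", "goal by the number seven",
#             "yellow card shown to the defender"])
```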